Google Pushes for Lighter Copyright Rules and Fewer AI Export Restrictions
In its policy proposal, Google urges policymakers to look beyond just the risks of AI and consider how restrictive regulations could harm innovation, competitiveness, and scientific progress. “For too long, AI policymaking has focused mainly on the risks, overlooking the damage that excessive regulations can cause,” the company stated. “Under the new administration, we see a shift toward a more balanced approach.”
A Controversial Stance on Copyright
One of the more contentious aspects of Google’s proposal is its stance on intellectual property. The company insists that AI development depends on “fair use and text-and-data mining exceptions,” which allow models to be trained on publicly available data—including copyrighted material—without heavy restrictions.
Google argues that such exceptions are essential for AI innovation and that training on copyrighted data doesn’t significantly harm content owners. It also highlights the difficulties of negotiating access to datasets, calling these negotiations “highly unpredictable, imbalanced, and lengthy.”
This argument, however, is at the heart of several lawsuits Google is currently facing. Many content creators and media companies accuse the tech giant of using their work without permission or compensation. U.S. courts have yet to determine whether the fair use doctrine truly shields AI developers from copyright claims.
Export Controls: A Battle Over AI Chips
Beyond copyright, Google also challenges certain AI-related export controls put in place by the Biden administration. It warns that these restrictions—designed to limit the sale of advanced AI chips to certain countries—could hurt U.S. cloud service providers and undermine economic competitiveness.
This position contrasts with that of Microsoft, which in January stated that it was “confident” it could comply with the export rules. The regulations do include exemptions for trusted businesses that need large quantities of AI chips, offering some flexibility.
Calling for Stronger AI Research Investments
Google’s proposal also emphasizes the need for increased investment in AI research and development. The company pushes back against recent federal efforts to cut funding and instead calls for “long-term, sustained” support for AI innovation. It encourages the government to:
- Release datasets that could benefit AI research.
- Allocate funding for early-stage AI projects.
- Ensure computing resources and AI models are widely available to scientists and institutions.
A Push for Federal AI Legislation
With the U.S. grappling with a patchwork of state-level AI laws, Google is urging lawmakers to pass a unified federal AI framework. The number of pending AI-related bills has surged to 781, creating a complex and inconsistent regulatory landscape.
Google particularly warns against holding AI developers liable for how their models are used, arguing that companies often have little control over how their AI tools are deployed. The company opposed past legislation like California’s SB 1047, which sought to establish clearer legal responsibilities for AI developers.
Concerns Over Transparency Rules
Finally, Google pushes back against strict disclosure requirements being considered in the U.S. and EU. It warns that overly broad transparency rules could expose trade secrets, help competitors copy products, and even pose national security risks by revealing how to bypass AI safeguards.
Despite Google’s concerns, governments worldwide are pushing for more transparency. California’s AB 2013, for example, requires AI companies to disclose high-level summaries of their training datasets. The EU’s AI Act will soon require companies to provide model deployers with detailed information on how their systems work, their limitations, and potential risks.
The Growing AI Regulation Debate
Google’s stance highlights the ongoing struggle between innovation and regulation in the AI industry. While the company sees many regulations as roadblocks, others argue that stronger rules are necessary to protect data rights, security, and ethical AI use. With legal battles mounting and policymakers drafting new laws, the future of AI regulation in the U.S. remains uncertain—but it’s clear the debate is far from over.